# Text classification optimization
**GIST Embedding V0** · avsolatorio · MIT · Text Embedding · English · 252.21k downloads · 26 likes
GIST-Embedding-v0 is a sentence embedding model built on sentence-transformers, used mainly for sentence similarity and feature extraction.

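As a usage sketch, a model like this can be loaded through the sentence-transformers library and used as a bi-encoder for similarity scoring. The Hub ID `avsolatorio/GIST-Embedding-v0` is inferred from the listing above; check the model card for the exact identifier.

```python
# Minimal sketch: encode two sentences and score their similarity.
# Assumes the Hub ID "avsolatorio/GIST-Embedding-v0" from the listing above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("avsolatorio/GIST-Embedding-v0")

sentences = [
    "How do I bake sourdough bread?",
    "What is the recipe for sourdough?",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```
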
**NorBERT 3 Base** · ltg · Apache-2.0 · Large Language Model · Transformers, Other · 345 downloads · 7 likes
NorBERT 3 is a next-generation Norwegian language model based on the BERT architecture, supporting both written standards of Norwegian, Bokmål and Nynorsk.

**Roberta Base Culinary** · juancavallotti · Apache-2.0 · Large Language Model · Transformers · 34 downloads · 0 likes
A model fine-tuned from bert-base-uncased on culinary-related datasets, suitable for NLP tasks such as text classification.

**FERNET C5** · fav-kky · Large Language Model · Transformers, Other · 219 downloads · 7 likes
FERNET-C5 is a Czech monolingual BERT-base model pretrained on C5, a 93 GB cleaned corpus of Czech web text.

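A pretrained BERT-style checkpoint like this one is typically exercised as a masked language model. The sketch below assumes the Hub ID `fav-kky/FERNET-C5` and the standard `[MASK]` token; adjust both against the actual model card.

```python
# Minimal fill-mask sketch for a Czech BERT-style model.
# "fav-kky/FERNET-C5" is an assumed Hub ID taken from the listing above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="fav-kky/FERNET-C5")

# Czech prompt: "Prague is the capital of the Czech [MASK]."
for prediction in fill_mask("Praha je hlavní město České [MASK]."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```
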
**Bert Hateful Memes Expanded** · limjiayi · Apache-2.0 · Text Classification · Transformers · 29 downloads · 4 likes
A model fine-tuned from bert-base-uncased for identifying hateful content in meme text.

**Distilbert Base Uncased Finetuned Cola** · jimmyliao · Apache-2.0 · Text Classification · Transformers · 15 downloads · 0 likes
A text classification model fine-tuned from distilbert-base-uncased on the GLUE CoLA dataset for sentence acceptability judgments.

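A hedged sketch of how a CoLA fine-tune like this is usually queried through the transformers pipeline API. The Hub ID below is assumed from the author and model name in the listing and may differ from the actual checkpoint path.

```python
# Minimal sketch of acceptability classification with the pipeline API.
# The model ID is an assumption inferred from the listing; substitute the real one.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jimmyliao/distilbert-base-uncased-finetuned-cola",
)

# CoLA-style fine-tunes typically emit two labels (acceptable vs. unacceptable),
# often exposed as LABEL_1 / LABEL_0.
print(classifier("The cat sat on the mat."))
print(classifier("The cat sat mat on the."))
```
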
**Distilbert Base Uncased Finetuned Cola 4** · fadhilarkan · Apache-2.0 · Large Language Model · Transformers · 6 downloads · 0 likes
A DistilBERT model fine-tuned for grammatical acceptability classification, with strong results reported on its evaluation set.

**Bert Tagalog Base Uncased WWM** · jcblaise · GPL-3.0 · Large Language Model · Other · 18 downloads · 0 likes
A BERT variant trained on large-scale Tagalog text with whole-word masking, suitable for Filipino natural language processing tasks.

**Bert Base Multilingual Cased Finetuned** · am-shb · Large Language Model · Transformers · 17 downloads · 0 likes
A fine-tuned version of bert-base-multilingual-cased (training dataset not specified), intended for multilingual text processing tasks.

**Distilbert Base Uncased Finetuned Cola** · histinct7002 · Apache-2.0 · Text Classification · Transformers · 15 downloads · 0 likes
A lightweight DistilBERT-based text classification model fine-tuned on the GLUE CoLA task to judge the grammatical correctness of sentences.

**Sinbert Large** · NLPC-UOM · MIT · Large Language Model · Transformers, Other · 150 downloads · 6 likes
SinBERT is a Sinhala pretrained language model based on the RoBERTa architecture, trained on a large Sinhala monolingual corpus (sin-cc-15M).

**Sinbert Small** · NLPC-UOM · MIT · Large Language Model · Transformers, Other · 126 downloads · 4 likes
SinBERT Small is pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using the RoBERTa architecture and is suitable for Sinhala text processing tasks.

**SGPT 125M Weightedmean Msmarco Specb Bitfit** · Muennighoff · Text Embedding · 4,086 downloads · 2 likes
SGPT-125M is a sentence-transformer model tuned with weighted-mean pooling and BitFit, targeting sentence similarity tasks.

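To make the "weighted mean" part concrete, the sketch below applies position-weighted mean pooling over token hidden states, the pooling scheme SGPT-style models are described as using. The Hub ID is taken from the listing; the model card documents additional query/document formatting for the "specb" variant that this sketch omits.

```python
# Sketch of position-weighted mean pooling over transformer hidden states.
# Assumes the Hub ID "Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit"
# loads as a plain causal encoder via AutoModel; consult the model card.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("what is deep learning?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state           # (1, seq_len, dim)

# Position-weighted mean: token i gets weight proportional to its position,
# so later tokens contribute more to the sentence embedding.
mask = inputs["attention_mask"].unsqueeze(-1).float()    # (1, seq_len, 1)
positions = torch.arange(1, hidden.size(1) + 1, dtype=torch.float32).view(1, -1, 1)
weights = positions * mask
embedding = (hidden * weights).sum(dim=1) / weights.sum(dim=1)
print(embedding.shape)                                    # e.g. torch.Size([1, 768])
```
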